Neko is a self-hosted virtual browser solution that runs in Docker; it is meant to be a replacement for rabb.it.
On any Debian-based system, use one of the following two methods to install Docker:
root@docker0:~# apt search docker.io
Sorting... Done
Full Text Search... Done
docker-doc/stable,stable 18.09.1+dfsg1-7.1+deb10u2 all
Linux container runtime -- documentation
docker.io/stable,stable,now 18.09.1+dfsg1-7.1+deb10u2 amd64 [installed]
Linux container runtime
python-docker/stable 3.4.1-4 all
Python wrapper to access docker.io's control socket
python3-docker/stable,now 3.4.1-4 all [installed,automatic]
Python 3 wrapper to access docker.io's control socket
ruby-docker-api/stable 1.22.2-1 all
Ruby gem to interact with docker.io remote API
root@docker0:~# apt install docker.io -y
OR
root@docker0:~# curl -sSL https://get.docker.com/ | CHANNEL=stable bash
Once Docker is installed, you should get the following:
root@docker0:~# which docker
/usr/bin/docker
root@docker0:~# docker -v
Docker version 18.09.1, build 4c52b90
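docker-compose is also needed further down to bring the containers up; if it isn't already installed, it can typically be pulled from the Debian repositories (shown here as an example, the packaged version may differ from upstream):
root@docker0:~# apt install docker-compose -y
root@docker0:~# docker-compose --version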
From there, you can check which containers are currently active:
root@docker0:~# docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
86959b1d649a kutt/kutt "docker-entrypoint.s…" 4 months ago Up 18 minutes 0.0.0.0:3000->3000/tcp kutt_kutt_1
5411baddadcf postgres:12-alpine "docker-entrypoint.s…" 4 months ago Up 18 minutes 5432/tcp kutt_postgres_1
a685c6747987 redis:6.0-alpine "docker-entrypoint.s…" 4 months ago Up 18 minutes 6379/tcp kutt_redis_1
f33ae4911086 wonderfall/searx "run.sh" 5 months ago Up 18 minutes 0.0.0.0:9999->8888/tcp searx2
0ab72043d028 wonderfall/searx "--restart=always" 5 months ago Created 0.0.0.0:9999->8888/tcp searx
eb743d8f0703 nihilist666/dillinger:3.37.2 "/usr/local/bin/dumb…" 5 months ago Up 18 minutes 0.0.0.0:8000->8080/tcp dillinger
root@docker0:~/neko# docker search neko
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
nurdism/neko Self hosted virtual browser 7
nekottyo/kustomize-kubeval kubectl, kustomize, kubeval 2
nekottyo/hub-command Docker image for https://github.com/github/h… 1 [OK]
m1k1o/neko Fork of https://github.com/nurdism/neko/ 1
nekonyuu/kafka-builder Kafka buiding container ! 1 [OK]
nekorpg/dodontof どどんとふ x H2O 0 [OK]
nekorpg/nekochat Web chat application for tabletop role-playi… 0 [OK]
nekoyume/nekoyume Decentralized MMORPG based on Dungeon World … 0 [OK]
nekoffski/dashboo-api-gw-nightly Nightly builds of api gateway 0
nekonyuu/ubuntu-devel-py-sci Development Dockers for Ubuntu (with Python … 0 [OK]
nekonyuu/cerebro 0
nekoffski/dashboo-log-server-nightly Logging server for dashboo 0
nekokatt/stackless-python-hikari Stackless Python build for x86 for Hikari pi… 0
nekoffski/dashboo-syncer-nightly Nightly builds of dashboo syncer 0
nekorpg/nekoboard Web whiteboard application for tabletop role… 0 [OK]
nekonyuu/collectd-builder Collectd buiding container ! 0 [OK]
nekoruri/norikra 0 [OK]
nekoserv/base-sabnzbd Base image for sabnzbd 0
nekohasekai/nekox-build-script 0
nekometer/nekotaku 0
graywhale/neko 0
nekoruri/fluentd-twitter-bigquery 0
nekoaddict/karuta 0
nekoneko/centos6-ruby CentOS6 ruby image 0 [OK]
nekonoshippo/rtdemo 0
root@docker0:~/neko#
I already used that Debian VM to run a few other containers, but here we're interested in the neko container. Let's get neko's docker-compose file:
root@docker0:~# ls -lsh
total 12K
4.0K drwxr-xr-x 11 root root 4.0K Nov 1 09:45 dillinger
4.0K drwxr-xr-x 7 root root 4.0K Nov 29 17:08 kutt
root@docker0:~# mkdir neko
root@docker0:~# cd neko
root@docker0:~/neko# wget https://raw.githubusercontent.com/nurdism/neko/master/.examples/simple/docker-compose.yaml
root@docker0:~/neko# vim docker-compose.yaml
version: "2.0"
services:
  neko:
    image: nurdism/neko:firefox
    restart: always
    shm_size: "1gb"
    ports:
      - "80:8080"
      - "59000-59100:59000-59100/udp"
    environment:
      DISPLAY: :99.0
      NEKO_PASSWORD: neko
      NEKO_PASSWORD_ADMIN: admin
      NEKO_BIND: :8080
      NEKO_EPR: 59000-59100
      NEKO_NAT1TO1: 192.168.0.200
By default, neko assumes that you're going to use the public IP, so if the host sits on a LAN make sure to specify its local IP with the NEKO_NAT1TO1 environment variable. If you want to use neko directly from a public IP address, remove that parameter.
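If you're not sure which local address to put there, the host's LAN IP can be listed quickly (a simple check; the first address printed is usually the one you want):
root@docker0:~/neko# hostname -I | awk '{print $1}'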
In this compose file, neko's internal port 8080 is mapped to the host's port 80, and there are two passwords: one for users (default 'neko') and one for admins (default 'admin'). Edit them if you want, then :wq to save and quit vim, and use the docker-compose.yaml file to bring the container up:
root@docker0:~/neko# docker-compose up -d
Creating network "neko_default" with the default driver
Pulling neko (nurdism/neko:firefox)...
firefox: Pulling from nurdism/neko
804555ee0376: Downloading [========================================> ] 18.38MB/22.52MB
f3b26a078a5f: Downloading [==========> ] 12.94MB/62.35MB
c7e3e1771f69: Downloading [===========================> ] 10.37MB/18.82MB
01b5d8f1086c: Waiting
61bf5b264b09: Waiting
95369768d555: Waiting
7bfe74d8b679: Waiting
68ce98038604: Waiting
10efbff0f24f: Waiting
d899a33175af: Waiting
2ab7756db6a1: Waiting
1019839afc2b: Waiting
6bff0ee4124c: Waiting
1703d7743579: Pulling fs layer
71e3127fa99a: Pulling fs layer
050a3eb4e0d5: Pulling fs layer
[...]
Digest: sha256:a191ca218b72f19da9e111c16312c6209bbd8e5e744dee657920214dca665354
Status: Downloaded newer image for nurdism/neko:firefox
Creating neko_neko_1 ... done
Wait for it to complete, and then check the result:
root@docker0:~/neko# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ff1638fea9b nurdism/neko:firefox "/usr/bin/supervisor…" About a minute ago Up About a minute 0.0.0.0:59000-59100->59000-59100/udp, 0.0.0.0:80->8080/tcp neko_neko_1
[...]
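If the container doesn't come up cleanly, its logs are the first place to look, for example:
root@docker0:~/neko# docker logs neko_neko_1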
[ 10.0.0.10/16 ] [ /dev/pts/27 ] [Github/blog/servers]
→ nmap -sCV -p80 192.168.0.200
Starting Nmap 7.91 ( https://nmap.org ) at 2021-04-18 07:32 CEST
Nmap scan report for 192.168.0.200
Host is up (0.0022s latency).
PORT STATE SERVICE VERSION
80/tcp open http Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
|_http-title: n.eko
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 11.68 seconds
As expected, port 80 is now open with our neko instance, so let's check it out:
To log in as a regular user we could use the default neko password, but since we want access to the admin commands, we log in with the default admin password instead:
Here you can see that this is a Linux container, running Firefox 68.0.
Now let's first delete our neko container: we don't want just one public neko instance, but three of them, so we will need to edit the docker-compose file:
root@docker0:~/neko# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
05d3bfecd1fd nurdism/neko:firefox "/usr/bin/supervisor…" 11 minutes ago Up 11 minutes 0.0.0.0:59000-59100->59000-59100/udp, 0.0.0.0:80->8080/tcp neko_neko_1
[...]
root@docker0:~/neko# docker container stop 05d
05d
root@docker0:~/neko# docker container rm 05d
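Since this container was created by docker-compose, the same stop-and-remove can also be done in one step from the project directory, for example:
root@docker0:~/neko# docker-compose down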
root@docker0:~/neko# vim docker-compose.yaml
version: "2.0"
services:
  neko1:
    image: nurdism/neko:firefox
    restart: always
    shm_size: "3gb"
    ports:
      - "8081:8080"
      - "59001-59100:59001-59100/udp"
    environment:
      DISPLAY: :99.0
      NEKO_PASSWORD: neko
      NEKO_PASSWORD_ADMIN: P@SSW0RD
      NEKO_BIND: :8080
      NEKO_EPR: 59001-59100
      NEKO_NAT1TO1: 192.168.0.200
  neko2:
    image: nurdism/neko:firefox
    restart: always
    shm_size: "3gb"
    ports:
      - "8082:8080"
      - "59101-59200:59101-59200/udp"
    environment:
      DISPLAY: :99.0
      NEKO_PASSWORD: neko
      NEKO_PASSWORD_ADMIN: P@SSW0RD
      NEKO_BIND: :8080
      NEKO_EPR: 59101-59200
      NEKO_NAT1TO1: 192.168.0.200
  neko3:
    image: nurdism/neko:firefox
    restart: always
    shm_size: "3gb"
    ports:
      - "8083:8080"
      - "59201-59300:59201-59300/udp"
    environment:
      DISPLAY: :99.0
      NEKO_PASSWORD: neko
      NEKO_PASSWORD_ADMIN: P@SSW0RD
      NEKO_BIND: :8080
      NEKO_EPR: 59201-59300
      NEKO_NAT1TO1: 192.168.0.200
:wq
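If you'd rather not copy-paste three nearly identical service blocks, a small shell loop can generate the whole file for you; a sketch, assuming the exact same image, passwords and port scheme as above:
# generate docker-compose.yaml for the three neko instances
{
  echo 'version: "2.0"'
  echo 'services:'
  for i in 1 2 3; do
    http_port=$((8080 + i))              # 8081, 8082, 8083
    udp_start=$((59001 + (i - 1) * 100)) # 59001, 59101, 59201
    udp_end=$((udp_start + 99))          # 59100, 59200, 59300
    cat <<EOF
  neko$i:
    image: nurdism/neko:firefox
    restart: always
    shm_size: "3gb"
    ports:
      - "$http_port:8080"
      - "$udp_start-$udp_end:$udp_start-$udp_end/udp"
    environment:
      DISPLAY: :99.0
      NEKO_PASSWORD: neko
      NEKO_PASSWORD_ADMIN: P@SSW0RD
      NEKO_BIND: :8080
      NEKO_EPR: $udp_start-$udp_end
      NEKO_NAT1TO1: 192.168.0.200
EOF
  done
} > docker-compose.yaml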
root@docker0:~/neko# docker-compose up -d
Creating neko_neko2_1 ... done
Creating neko_neko1_1 ... done
Creating neko_neko3_1 ... done
root@docker0:~/neko# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2aca6086627a nurdism/neko:firefox "/usr/bin/supervisor…" About a minute ago Up 17 seconds 0.0.0.0:59201-59300->59201-59300/udp, 0.0.0.0:8083->8080/tcp neko_neko3_1
876c5cf199bf nurdism/neko:firefox "/usr/bin/supervisor…" About a minute ago Up 17 seconds 0.0.0.0:59001-59100->59001-59100/udp, 0.0.0.0:8081->8080/tcp neko_neko1_1
7de701fd022e nurdism/neko:firefox "/usr/bin/supervisor…" About a minute ago Up 18 seconds 0.0.0.0:59101-59200->59101-59200/udp, 0.0.0.0:8082->8080/tcp neko_neko2_1
[ 10.0.0.10/16 ] [ /dev/pts/27 ] [Github/blog/servers]
→ nmap -sCV -p8081-8083 192.168.0.200
Starting Nmap 7.91 ( https://nmap.org ) at 2021-04-18 08:34 CEST
Nmap scan report for 192.168.0.200
Host is up (0.0027s latency).
PORT STATE SERVICE VERSION
8081/tcp open http Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
|_http-title: n.eko
8082/tcp open http Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
|_http-title: n.eko
8083/tcp open http Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
|_http-title: n.eko
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 7.18 seconds
It goes without saying: if you want to make something public, make sure you secure the passwords. Our three neko instances are now at 192.168.0.200:8081, 8082 and 8083, and each of them must be reachable from a public IP / domain name. To do that we will use an nginx reverse proxy: basically a Debian machine whose ports 80/443 are reachable from a public IP address (and therefore from a domain name), where nginx's role is to take the local services and serve them publicly, each under its own sub-domain and ideally behind TLS encryption. So let's set that up using acme.sh on my main nginx node:
First things first, set the DNS A records to point to the server's public IP address; if the root domain already points to the right IP, you can use a CNAME record to the root domain like I do:
[ 10.0.0.10/16 ] [ /dev/pts/27 ] [Github/blog/servers]
→ for i in {1..3}; do ping neko$i.void.yt -c1; done
PING void.yt (85.171.172.151) 56(84) bytes of data.
64 bytes from cryptpad.void.yt (85.171.172.151): icmp_seq=1 ttl=63 time=4.00 ms
--- void.yt ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.999/3.999/3.999/0.000 ms
PING void.yt (85.171.172.151) 56(84) bytes of data.
64 bytes from cryptpad.void.yt (85.171.172.151): icmp_seq=1 ttl=63 time=3.09 ms
--- void.yt ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.091/3.091/3.091/0.000 ms
PING void.yt (85.171.172.151) 56(84) bytes of data.
64 bytes from cryptpad.void.yt (85.171.172.151): icmp_seq=1 ttl=63 time=8.99 ms
--- void.yt ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 8.987/8.987/8.987/0.000 ms
Now that the 3 subdomains work properly, set the appropriate subdomain nginx config for each of them:
[ 10.0.0.10/16 ] [ /dev/pts/27 ] [Github/blog/servers]
→ ssh root@10.0.0.101
root@10.0.0.101's password:
Linux home 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Apr 18 08:41:27 2021 from 10.0.0.10
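If nginx and acme.sh aren't installed on this node yet, set them up first (this uses acme.sh's official install script; the email address is a placeholder, use your own):
root@home:~# apt install nginx -y
root@home:~# curl https://get.acme.sh | sh -s email=my@example.com
root@home:~# source ~/.bashrc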
root@home:~# vim /etc/nginx/sites-available/neko1.void.yt.conf
upstream neko1backend {
server 192.168.0.200:8081;
}
server {
listen 80;
listen [::]:80;
server_name neko1.void.yt;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name neko1.void.yt;
ssl_certificate /root/.acme.sh/neko1.void.yt/fullchain.cer;
ssl_trusted_certificate /root/.acme.sh/neko1.void.yt/neko1.void.yt.cer;
ssl_certificate_key /root/.acme.sh/neko1.void.yt/neko1.void.yt.key;
ssl_protocols TLSv1.3 TLSv1.2;
ssl_ciphers 'TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;
ssl_ecdh_curve auto;
ssl_stapling on;
ssl_stapling_verify on;
resolver 80.67.188.188 80.67.169.40 valid=300s;
resolver_timeout 10s;
add_header X-XSS-Protection "1; mode=block"; #Cross-site scripting
add_header X-Frame-Options "SAMEORIGIN" always; #clickjacking
add_header X-Content-Type-Options nosniff; #MIME-type sniffing
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
location / {
proxy_pass http://neko1backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
:wq
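The neko2 and neko3 configs are identical apart from the instance number and the upstream port, so rather than typing them out you can generate them from the first one and just double-check them in vim afterwards; for example:
root@home:~# sed -e 's/neko1/neko2/g' -e 's/8081/8082/g' /etc/nginx/sites-available/neko1.void.yt.conf > /etc/nginx/sites-available/neko2.void.yt.conf
root@home:~# sed -e 's/neko1/neko3/g' -e 's/8081/8083/g' /etc/nginx/sites-available/neko1.void.yt.conf > /etc/nginx/sites-available/neko3.void.yt.conf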
root@home:~# vim /etc/nginx/sites-available/neko2.void.yt.conf
root@home:~# vim /etc/nginx/sites-available/neko3.void.yt.conf
root@home:~# ln -s /etc/nginx/sites-available/neko1.void.yt.conf /etc/nginx/sites-enabled/
root@home:~# ln -s /etc/nginx/sites-available/neko2.void.yt.conf /etc/nginx/sites-enabled/
root@home:~# ln -s /etc/nginx/sites-available/neko3.void.yt.conf /etc/nginx/sites-enabled/
root@home:~# nginx -t
nginx: [emerg] BIO_new_file("/root/.acme.sh/neko1.void.yt/fullchain.cer") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/root/.acme.sh/neko1.void.yt/fullchain.cer','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: configuration file /etc/nginx/nginx.conf test failed
When you test the configs, you see that they don't pass yet; that's because we don't have the TLS certificates, so let's get them:
root@home:~# systemctl stop nginx
root@home:~# acme.sh --issue --standalone -d neko1.void.yt -k 4096
root@home:~# acme.sh --issue --standalone -d neko2.void.yt -k 4096
root@home:~# acme.sh --issue --standalone -d neko3.void.yt -k 4096
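acme.sh also installs a cron job that renews these certificates automatically; you can confirm it is there (keep in mind nginx needs a reload to pick up renewed certificates, which acme.sh can handle through its --install-cert / --reloadcmd mechanism):
root@home:~# crontab -l | grep acme.sh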
root@home:~# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Once that's done, start nginx again:
root@home:~# systemctl start nginx
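Each subdomain should now answer over HTTPS; a quick sanity check from the client machine could look like this:
→ for i in {1..3}; do curl -sI https://neko$i.void.yt | head -n1; done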
And finally see the result:
And that's it! We managed to set up three public neko Docker instances thanks to our nginx reverse proxy.